MINA: Multi-Input Network Augmentation for Enhancing Tiny Deep Learning

Authors

Abstract

Network Augmentation (NetAug) is a recent method for improving the performance of tiny neural networks on large-scale datasets. It provides additional supervision to the tiny model from larger augmented models, mitigating the issue of underfitting. However, the capacity of the augmented models is not fully utilized, resulting in an underutilization of resources. In order to utilize this capacity without exacerbating the underfitting of the tiny model, we propose a new method called Multi-Input Network Augmentation (MINA). MINA converts the tiny network into a multi-input configuration, allowing it to receive more diverse inputs during training. Additionally, the network can be converted back to its original single-input configuration after training. Our extensive experiments on several datasets demonstrate that MINA is effective at improving tiny networks. We also show that it consistently helps on downstream tasks, such as fine-grained image classification and object detection.
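The train-time multi-input / deploy-time single-input idea described in the abstract can be sketched as follows. This is a minimal illustrative sketch using a linear toy model; the names (`forward_train`, `W_aux`, the additive auxiliary branch) are our assumptions for illustration, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "tiny network": a single linear map y = W x.
W = rng.normal(size=(4, 8))

# Hypothetical auxiliary input branch used only during training: a second,
# differently augmented view of the sample is projected and added to the trunk.
W_aux = rng.normal(size=(4, 8))

def forward_train(x_main, x_aux):
    # Multi-input configuration: the network sees more diverse inputs.
    return W @ x_main + W_aux @ x_aux

def forward_deploy(x_main):
    # After training, the auxiliary branch is dropped: the deployed model
    # is the original single-input network with no extra inference cost.
    return W @ x_main

x = rng.normal(size=8)
x_view = x + 0.1 * rng.normal(size=8)  # a second augmented view of x

y_train = forward_train(x, x_view)
y_deploy = forward_deploy(x)
print(y_deploy.shape)  # (4,)
```

The key property the sketch preserves is that the deployed forward pass is exactly the original single-input model; the auxiliary parameters exist only at training time.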


Similar Articles

Data Augmentation Using Multi-Input Multi-Output Source Separation for Deep Neural Network Based Acoustic Modeling

We investigate the use of local Gaussian modeling (LGM) based source separation to improve speech recognition accuracy. Previous studies have shown that the LGM based source separation technique has been successfully applied to the runtime speech enhancement and the speech enhancement of training data for deep neural network (DNN) based acoustic modeling. In this paper, we propose a data augmen...


Multi-Focus Attention Network for Efficient Deep Reinforcement Learning

Deep reinforcement learning (DRL) has shown incredible performance in learning various tasks to the human level. However, unlike human perception, current DRL models connect the entire low-level sensory input to the state-action values rather than exploiting the relationships between and among the entities that constitute the sensory input. Because of this difference, DRL needs a vast amount of experi...


Input Fast-Forwarding for Better Deep Learning

This paper introduces a new architectural framework, known as input fast-forwarding, that can enhance the performance of deep networks. The main idea is to incorporate a parallel path that sends representations of input values forward to deeper network layers. This scheme is substantially different from “deep supervision,” in which the loss layer is re-introduced to earlier layers. The parallel...
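The parallel path described above can be sketched in a toy two-layer model. This is our own minimal illustration of the stated idea (a projection of the raw input injected at a deeper layer); the weight names and the additive combination are assumptions, not the paper's exact framework.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network with a fast-forward path.
W1 = rng.normal(size=(16, 8))   # first layer
W2 = rng.normal(size=(4, 16))   # second (deeper) layer
W_ff = rng.normal(size=(16, 8)) # parallel path: projects the raw input forward

def relu(z):
    return np.maximum(z, 0.0)

def forward(x):
    h = relu(W1 @ x)
    # The deeper layer sees both the learned features and a fresh
    # representation of the raw input carried by the parallel path.
    h = h + relu(W_ff @ x)
    return W2 @ h

y = forward(rng.normal(size=8))
print(y.shape)  # (4,)
```

Note this differs from deep supervision: no loss is attached to earlier layers; only input representations are sent forward.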


Improving Deep Learning using Generic Data Augmentation

Deep artificial neural networks require a large corpus of training data in order to effectively learn, where collection of such training data is often expensive and laborious. Data augmentation overcomes this issue by artificially inflating the training set with label preserving transformations. Recently there has been extensive use of generic data augmentation to improve Convolutional Neural N...


A Bayesian Data Augmentation Approach for Learning Deep Models

Data augmentation is an essential part of the training process applied to deep learning models. The motivation is that a robust training process for deep learning models depends on large annotated datasets, which are expensive to be acquired, stored and processed. Therefore a reasonable alternative is to be able to automatically generate new annotated training samples using a process known as d...



Journal

Journal title: IEEE Access

Year: 2023

ISSN: 2169-3536

DOI: https://doi.org/10.1109/access.2023.3319313